Section: New Results

Computer Graphics

Multi-Image Based Photon Tracing for Interactive Global Illumination of Dynamic Scenes

Participants : Chunhui Yao, Bin Wang, Bin Chan, Junhai Yong, Jean-Claude Paul.

Image space photon mapping has the advantage of simple implementation on the GPU, without pre-computation of complex acceleration structures. However, existing approaches use only a single image for tracing caustic photons, so they are limited to computing only a part of the global illumination effects for very simple scenes. In this paper we fully extend the image space approach by using multiple environment maps for the photon mapping computation to achieve interactive global illumination of dynamic complex scenes. The two key problems raised by the introduction of multiple images are 1) selecting the images to ensure adequate scene coverage; and 2) reliably computing ray-geometry intersections with multiple images. We present effective solutions to these problems and show that, with multiple environment maps, the image-space photon mapping approach can achieve interactive global illumination of dynamic complex scenes. The advantages of the method are demonstrated by comparison with other existing interactive global illumination methods [10].

Quality Solid Texture Synthesis using Position and Index Histogram Matching

Participants : Jiating Chen, Bin Wang.

Synthesis quality is one of the most important aspects of solid texture synthesis algorithms. In recent years several methods have been proposed to generate high quality solid textures. However, these existing methods often suffer from synthesis artifacts such as blurring, missing texture structures and aberrant voxel colors. In this paper, we introduce a novel algorithm for synthesizing high quality solid textures from 2D exemplars. We first analyze the relevant factors for further improvements of the synthesis quality, and then adopt an optimization framework with k-coherence search and a discrete solver for solid texture synthesis. The texture optimization approach is integrated with two new kinds of histogram matching methods, position and index histogram matching, which effectively cause the global statistics of the synthesized solid textures to match those of the exemplars. Experimental results show that our algorithm outperforms, or is at least comparable to, previous solid texture synthesis algorithms in terms of synthesis quality [44].
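
The role of histogram matching can be illustrated with a minimal sketch: classic rank-based matching of a synthesized channel to an exemplar's value distribution. This is a generic illustration of the idea, not the paper's position and index variants; the function name is ours:

```python
import numpy as np

def histogram_match(synth, exemplar):
    """Force the value distribution of `synth` to match `exemplar`:
    the k-th smallest synthesized value is replaced by the k-th
    quantile of the exemplar (classic rank-based histogram matching)."""
    flat = synth.ravel()
    order = np.argsort(flat)                                    # ranks of synthesized values
    quantiles = np.quantile(exemplar, np.linspace(0.0, 1.0, flat.size))
    out = np.empty(flat.size, dtype=float)
    out[order] = quantiles                                      # k-th rank gets k-th quantile
    return out.reshape(synth.shape)
```

After matching, the synthesized values carry exactly the exemplar's global statistics while preserving their spatial ordering, which is the property the paper's matching steps enforce during optimization.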

Real-time rendering of heterogeneous translucent objects with arbitrary shapes

Participants : Yajun Wang, Jiaping Wang, Nicolas Holzschuch, Kartic Subr, Jun-Hai Yong, Baining Guo.

We present a real-time algorithm for rendering translucent objects of arbitrary shapes. We approximate the scattering of light inside the objects using the diffusion equation, which we solve on-the-fly on the GPU. Our algorithm is general enough to handle arbitrary geometry, heterogeneous materials, deformable objects and modifications of the lighting, all in real time. In a pre-processing step, we discretize the object into a regular 4-connected structure (QuadGraph). Due to its regular connectivity, this structure is easily packed into a texture and stored on the GPU. At runtime, we use the QuadGraph stored on the GPU to solve the diffusion equation in real time, taking into account the varying input conditions: incoming light, object material and geometry. We handle deformable objects, provided the deformation does not change the topological structure of the objects [58].
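
The runtime solve can be sketched as explicit relaxation of the diffusion equation on a regular 4-connected grid, mirroring the QuadGraph's per-node update; this CPU version is illustrative (periodic boundaries and the `kappa` coefficient are our simplifications, not the paper's solver):

```python
import numpy as np

def diffuse(u, kappa, steps):
    """Explicit diffusion on a regular 2D grid: each node relaxes
    toward the mean of its 4 neighbours, as in a 4-connected
    QuadGraph-style update (periodic boundaries for simplicity)."""
    u = u.astype(float).copy()
    for _ in range(steps):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = (1.0 - kappa) * u + kappa * 0.25 * nb   # blend with neighbour average
    return u
```

Because every node runs the same neighbour-gather, the loop body maps directly onto a per-texel GPU pass over the packed texture.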

Fast Local Color Transfer via Dominant Colors Mapping

Participants : Weiming Dong, Guanbo Bao, Xiaopeng Zhang, Jean-Claude Paul.

We present a novel algorithm for fast local color transfer based on dominant colors mapping. Our method establishes a tight connection between the local color statistics of the source and target images, so that all the obvious color features can be presented in the result [45].

Real-time watercolor illustrations and animation on GPU

Participants : Miao-Yi Wang, Bin Wang, Jun-Hai Yong.

This paper presents a real-time approach to render 3D scenes with watercolor effects on the GPU. Most stages of the approach are implemented with image-space techniques. Our algorithm renders the detail layer, ambient layer and stroke layer separately, and then combines them into the final result. During rendering, we use screen-space ambient occlusion and shadow mapping to compute shadows in much less time, and we use an image filtering approach to simulate the important effects of watercolor. Because our approach is mainly implemented with image-space techniques, it is convenient to accelerate the rendering on the GPU, and our approach achieves real-time speed [59].

A Hierarchical Grid Based Framework for Fast Collision Detection

Participants : Wenshan Fan, Bin Wang, Jean-Claude Paul, Jiaguang Sun.

We present a novel hierarchical grid based method for fast collision detection (CD) of deformable models on GPU architectures. A two-level grid is employed to accommodate the non-uniform distribution of practical scene geometry. A bottom-up method is implemented to assign the triangles to the hierarchical grid without any iteration, while a deferred scheme is introduced to efficiently update the data structure. To address the issue of load balancing, which greatly influences performance in SIMD parallelism, a propagation scheme which utilizes a parallel scan and a segmented scan is presented, distributing workloads evenly across all concurrent threads. The proposed method supports both discrete collision detection (DCD) and continuous collision detection (CCD) with self-collision. Typical benchmarks are tested to verify the effectiveness of our method. The results highlight our speedups over prior algorithms on different commodity GPUs [17].
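
The two scan primitives behind the workload distribution can be sketched sequentially: a segmented scan is simply an exclusive prefix sum that restarts at each segment head (function names are illustrative; the GPU versions run these in parallel):

```python
def exclusive_scan(xs):
    """Exclusive prefix sum: out[i] = sum of xs[0..i-1]."""
    out, acc = [], 0
    for x in xs:
        out.append(acc)
        acc += x
    return out

def segmented_scan(xs, flags):
    """Exclusive scan that restarts wherever flags[i] == 1,
    e.g. at the first triangle of each grid cell, so per-cell
    offsets can be assigned to concurrent threads evenly."""
    out, acc = [], 0
    for x, f in zip(xs, flags):
        if f:
            acc = 0
        out.append(acc)
        acc += x
    return out
```

Given per-cell triangle counts, the exclusive scan yields each cell's write offset, and the segmented scan yields each triangle's slot within its cell.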

Improved Stochastic Progressive Photon Mapping with Metropolis Sampling

Participants : Jiating Chen, Bin Wang, Junhai Yong.

This paper presents an improvement to stochastic progressive photon mapping (SPPM), a method for robustly simulating complex global illumination with distributed ray tracing effects. Like photon mapping and other particle tracing algorithms, SPPM becomes inefficient when the photons are poorly distributed: an inordinate number of photons is required to reduce the error caused by noise and bias to acceptable levels. In order to optimize the distribution of photons, we propose an extension of SPPM with a Metropolis-Hastings algorithm, effectively exploiting local coherence among the light paths that contribute to the rendered image. A well-designed scalar contribution function is introduced as our Metropolis sampling strategy, targeting specific image areas with large error to improve the efficiency of the radiance estimator. Experimental results demonstrate that the new Metropolis sampling based approach maintains the robustness of the standard SPPM method, while significantly improving rendering efficiency for a wide range of scenes with complex lighting [14].
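
The Metropolis-Hastings step can be sketched for a one-dimensional scalar contribution function f: mutate the current sample and accept with probability min(1, f(y)/f(x)), so samples concentrate where f (and hence the estimated error) is large. This is a generic illustration, not the paper's mutation strategy:

```python
import random

def metropolis_samples(f, n, step=0.1, seed=7):
    """Metropolis-Hastings over [0, 1): propose a small mutation of
    the current sample and accept it with probability min(1, f(y)/f(x)).
    Requires f > 0 everywhere; samples end up distributed ~ f."""
    rng = random.Random(seed)
    x, out = rng.random(), []
    for _ in range(n):
        y = (x + rng.uniform(-step, step)) % 1.0   # wrap-around mutation
        if rng.random() < min(1.0, f(y) / f(x)):   # MH acceptance test
            x = y
        out.append(x)
    return out
```

With f acting as the scalar contribution function, the chain automatically spends most of its photons on the high-error parts of the domain.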

Efficient Depth-of-Field Rendering with Adaptive Sampling and Multiscale Reconstruction

Participants : Jiating Chen, Bin Wang, Yuxiang Wang, Ryan S. Overbeck, Junhai Yong, Wenping Wang.

Depth-of-field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly: it can take hundreds to thousands of samples to achieve noise-free results using Monte Carlo integration. This paper introduces an efficient adaptive depth-of-field rendering algorithm that achieves noise-free results using significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the adaptive sampling phase, the sample density is determined by a 'blur-size' map and a 'pixel-variance' map computed during initialization. In the image reconstruction phase, based on the blur-size map, we use a novel multiscale reconstruction filter to dramatically reduce the noise in the defocused areas where the sampled radiance has high variance. Because of the efficiency of this new filter, only a few samples are required. With the combination of the adaptive sampler and the multiscale filter, our algorithm renders near-reference quality depth-of-field images with significantly fewer samples than previous techniques [13].
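
The adaptive sampling phase can be sketched as a budget allocation driven by the pixel-variance map: every pixel gets a small floor of samples, and the remaining budget is spent proportionally to the variance estimate. This is a simplified stand-in for the paper's scheme; names and the floor value are illustrative:

```python
def allocate_samples(variance, budget, min_spp=4):
    """Distribute a total sample budget across pixels: a floor of
    min_spp samples each, plus extra samples proportional to each
    pixel's variance estimate (high-variance pixels get more)."""
    n = len(variance)
    extra = budget - min_spp * n          # samples left after the floor
    total = sum(variance) or 1.0          # avoid division by zero
    return [min_spp + int(extra * v / total) for v in variance]
```

Rounding can leave a few samples unspent in general; a production allocator would redistribute the remainder.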

Real-Time Illumination of Complex Lights Using Divided Sampling

Participants : Chunhui Yao, Bin Wang, Junhai Yong.

Existing methods for real-time illumination from complex lights either require lengthy pre-computation or focus only on special types of illumination. Because computing different kinds of illumination requires different sampling strategies, this paper introduces a novel, efficient framework for rendering the illumination of complex light sources that requires no pre-computation. We divide the rendering equation into three parts: a high-frequency term, a low-frequency term and an occlusion term, and compute each term with an appropriate sampling strategy. The high-frequency term is solved by importance sampling the BRDF, while the low-frequency term is computed by importance sampling the light sources. The occlusion term is computed from depth information in screen space, and the required number of samples is greatly reduced by interleaved sampling. Our framework is easy to implement on the GPU and can solve many real-time rendering problems. We take real-time environment-map lighting as an example application of this framework. The results show that our technique can handle complete light effects with higher quality than previous works [31].
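
The benefit of matching each term with its own sampling strategy comes from the importance sampling identity: drawing x from a pdf p and averaging f(x)/p(x) estimates the integral of f, with low variance when p resembles f. A minimal one-dimensional illustration (not the paper's estimator; the integrand and pdf are our toy example):

```python
import random, math

def mc_uniform(f, n, rng):
    """Plain Monte Carlo: uniform samples on [0, 1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_importance(f, sample, pdf, n, rng):
    """Importance sampling: draw x ~ pdf and average f(x)/pdf(x).
    Variance vanishes as pdf approaches f up to a constant factor."""
    return sum(f(x) / pdf(x) for x in (sample(rng) for _ in range(n))) / n

f = lambda x: 2.0 * x                       # integrand, integral over [0,1] is 1
sample = lambda r: math.sqrt(r.random())    # draws x with pdf p(x) = 2x
pdf = lambda x: 2.0 * x
```

Here the pdf is exactly proportional to the integrand, so every term f(x)/p(x) equals 1 and the estimator has zero variance; this is the idealized case the BRDF-term and light-term samplers each approximate.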

Fast Multi-Operator Image Resizing and Evaluation

Participants : Weiming Dong, Guanbo Bao, Xiaopeng Zhang, Jean-Claude Paul.

Current multi-operator image resizing methods succeed in generating impressive results by using an image similarity measure to guide the resizing process: an optimal operation path is found in the resizing space. However, their slow resizing speed, caused by the inefficient computation of the bidirectional patch matching, is a drawback for practical use. In this paper, we present a novel method to address this problem. By combining seam carving with scaling and cropping, our method realizes content-aware image resizing very fast. We define cost functions combining image energy and a dominant color descriptor for all the operators, evaluating the damage to both local image content and global visual effect. Our algorithm can therefore automatically find an optimal sequence of operations to resize the image by dynamic programming or a greedy algorithm. We also extend our algorithm to indirect image resizing, which can protect the aspect ratio of the dominant object in an image [16].

Real-time Volume Caustics with Image-based Photon Tracing

Participants : Yuxiang Wang, Bin Wang, Li Chen.

Rendering volume caustics in participating media is often expensive, even with acceleration approaches. Basic volume photon tracing can render such effects, but is rather slow due to the massive quantity of photons to be traced. In this paper we present an image-based volume photon tracing method for rendering volume caustics at real-time frame rates. Motivated by multi-image based photon tracing, our technique uses multiple depth maps to accelerate the intersection test procedure, achieving a plausible and fast rendering of volume caustics. Each photon dynamically selects the depth map layer for its intersection test, and the test converges to an approximate solution using image space methods in a few recursions. This allows us to compute the photon distribution in participating media while avoiding massive computation on accurate intersection tests with scene geometry. We demonstrate that our technique, combined with photon splatting techniques, is able to render volume caustics caused by multiple refractions [39].

Parallel Spatial Hashing for Collision Detection of Deformable Surfaces

Participants : Wenshan Fan, Bin Wang, Jianliang Zhou, Jiaguang Sun.

We present a fast collision detection method for deformable surfaces using parallel spatial hashing on GPU architectures. Our method exploits efficient update and access of a uniform grid to accelerate performance. To deal with the inflexible memory system, which makes building stream data a challenging task on the GPU, we propose to subdivide the whole workload into irregular segments and design an efficient evaluation algorithm, which employs parallel scan and stream compaction, to build the stream data in parallel. Load balancing is a key aspect of SIMD parallelism: we break the heavy and irregular collision computation down into a lightweight part and a heavyweight part, ensuring the latter runs in a load-balanced manner with each concurrent thread processing just a single collision. In practice, our approach can perform collision detection in tens of milliseconds on a PC with an NVIDIA GTX 260 graphics card, on benchmarks composed of millions of triangles. The results highlight our speedups over prior CPU-based and GPU-based algorithms [32].
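
The broad phase of spatial hashing can be sketched sequentially: bucket primitives by integer cell coordinates so that candidate collision pairs are only those sharing a cell. This is a CPU illustration of the structure the paper builds in parallel; names are illustrative:

```python
from collections import defaultdict

def build_hash_grid(points, cell):
    """Uniform spatial hash: map each point to the integer coordinate
    of the grid cell containing it."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    return grid

def candidate_pairs(grid):
    """Broad-phase output: only points sharing a cell are candidate
    collision pairs; everything else is culled without a distance test."""
    pairs = set()
    for ids in grid.values():
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                pairs.add((ids[a], ids[b]))
    return pairs
```

On the GPU, the per-cell lists become the stream data built with scan and compaction, and the per-cell pair loops become the heavyweight part distributed one collision per thread.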

Distribution-Aware Image Color Transfer

Participants : Fuzhang Wu, Weiming Dong, Xing Mei, Xiaopeng Zhang, Xiaohong Jia, Jean-Claude Paul.

Color transfer is a practical image editing technology useful in various applications. An ideal color transfer algorithm should keep the scene of the source image while applying the color styles of the reference image. All the dominant color styles of the reference image should be presented in the result, especially when there are similar contents in the source and reference images. We propose a robust color transfer framework to address these issues. Our method establishes soft connections between the local color statistics of the source and reference images. All the obvious color features can be presented in the result image, as well as the spatial distribution of the reference color pattern [40].

Translucent Material Transfer Based on Single Images

Participants : Chao Li, Weiming Dong, Ning Zhou, Xiaopeng Zhang, Jean-Claude Paul.

Extraction and re-rendering of real materials contribute significantly to various image-based applications. As one of the key properties in modeling the appearance of an object, a material mainly determines the effects caused by light transport. Understanding the characteristics of a complex material from a single photograph and transferring it to an object in another image is therefore a very challenging problem. In this paper, we present a novel framework to transfer real translucent materials, such as fruits and flowers, between single images. We define a group of information that models the relevant attributes during the extraction and transfer process. Once we extract this information from both the source and target images, we can easily produce a realistic photograph of an object with target-like materials and suitable shading effects in the environment of the source image [37].

Multicage Image Deformation On GPU

Participants : Weiliang Meng, Xiaopeng Zhang, Weiming Dong, Jean-Claude Paul.

As a linear blending method, cage-based deformation is widely used in various applications of image and geometry processing. In most cases, especially in interactive mode, deformation based on embedded cages does not work well: some of the coefficients are not continuous and make the deformation discontinuous, producing a “spring up” phenomenon. However, it is common to deform the ROI (Region of Interest) while keeping local parts untouched or only slightly adjusted. In this paper, we design a scheme to solve this problem. A multicage can be generated manually or automatically, and the image deformation is adjusted according to the local cage shape to preserve important details. Moreover, we do not need to track each pixel's position relative to the multicage: all pixels go through the same process, which saves considerable time. We also design a packing method for the cage coordinates that packs all the necessary coefficients into one texture. A vertex shader can then be used to accelerate the deformation process, leading to real-time deformation even for large images [38].
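
Cage-based deformation is a linear blend of cage vertices with point-dependent weights. The sketch below uses simple inverse-squared-distance weights as a stand-in for proper cage coordinates; it only illustrates the linear-blend structure that lets the coefficients be precomputed and packed into a texture, not the coordinates the paper uses:

```python
def cage_deform(point, cage_src, cage_dst):
    """Deform a 2D point as a weighted blend of cage vertices.
    Weights depend only on the source cage, so the deformed position
    is a fixed linear combination of the target cage vertices.
    (Inverse-squared-distance weights, an illustrative stand-in.)"""
    ws = []
    for i, (cx, cy) in enumerate(cage_src):
        d2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2
        if d2 == 0.0:
            return cage_dst[i]            # point sits exactly on a cage vertex
        ws.append(1.0 / d2)
    s = sum(ws)
    x = sum(w * vx for w, (vx, vy) in zip(ws, cage_dst)) / s
    y = sum(w * vy for w, (vx, vy) in zip(ws, cage_dst)) / s
    return (x, y)
```

Because the per-vertex weights are fixed once the source cage is known, they can be stored per pixel and re-evaluated against any target cage in a shader, which is what makes the texture-packing approach pay off.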

Interactive Visual Simulation of Dynamic Ink Diffusion Effects

Participants : Shibiao Xu, Xing Mei, Weiming Dong, Zhiyi Zhang, Xiaopeng Zhang.

This paper presents an effective method that simulates the ink diffusion process with visually plausible effects and real-time performance. Our algorithm updates the dynamic ink volume with a hybrid grid-particle representation: the fluid velocity field is calculated on a low-resolution grid structure, while the highly detailed ink effects are controlled and visualized with the particles. We propose an improved ink rendering method using particle sprites and motion blur techniques. The simulation and rendering processes are efficiently implemented on graphics hardware at interactive frame rates. Compared to traditional simulation methods that treat water and ink as two miscible fluids, our method is simple but effective: it captures various ink effects such as pinned boundaries [Chu and Tai 2005] and filament patterns [Shiny et al. 2010] in real time; it allows easy interaction with the artists; and it includes basic solid-fluid interaction. We believe that our method is attractive for industrial animation and art design [41].
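
The grid-particle coupling can be sketched as particles sampling the velocity of the grid cell they occupy and stepping forward in time. This is a minimal illustration of the coupling (nearest-cell lookup, forward Euler), not the paper's fluid solver:

```python
def advect_particles(particles, velocity, dt):
    """Move ink particles through a coarse grid velocity field:
    each particle reads the (u, v) velocity of the cell it sits in
    (nearest-cell lookup) and takes one forward Euler step.
    `velocity` is a row-major 2D list of (u, v) tuples."""
    out = []
    for (x, y) in particles:
        u, v = velocity[int(y)][int(x)]     # sample the containing cell
        out.append((x + dt * u, y + dt * v))
    return out
```

The grid stays small (it only carries velocities), while the particle count scales with the visual detail; a production version would interpolate velocities bilinearly rather than per cell.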